
    Regularized Neural User Model for Goal-Oriented Spoken Dialogue Systems

    User simulation is widely used to generate artificial dialogues in order to train statistical spoken dialogue systems and perform evaluations. This paper presents a neural network approach to user modeling that exploits a bidirectional encoder-decoder architecture with a regularization layer for each dialogue act. In order to minimize the impact of data sparsity, the dialogue act space is compressed according to the user goal. Experiments on the Dialogue State Tracking Challenge 2 (DSTC2) dataset yield significant results for dialogue-act and slot-level prediction, outperforming previous neural user modeling approaches in terms of F1 score. This work was supported by the Spanish Ministry of Science under grants TIN2014-54288-C4-4-R and TIN2017-85854-C4-3-R and by the EU H2020 EMPATHIC project under grant number 769872.
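    The abstract names the main architectural ingredients (a bidirectional encoder-decoder over dialogue acts plus a regularization layer per act) but not the implementation details. Below is a minimal sketch of that kind of neural user model in PyTorch; the vocabulary sizes, layer widths, dropout-based regularization, and the single multi-label output head are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch (not the paper's code) of a bidirectional encoder-decoder
# user model with a regularization (dropout) layer before the dialogue-act
# output head. All sizes below are assumed for illustration.
import torch
import torch.nn as nn

class NeuralUserModel(nn.Module):
    def __init__(self, n_system_acts=30, n_user_acts=20, emb=64, hidden=128, p_drop=0.5):
        super().__init__()
        self.embed = nn.Embedding(n_system_acts, emb)
        # Bidirectional encoder over the system's dialogue-act history.
        self.encoder = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        # Regularized output head predicting the user's next dialogue acts (multi-label).
        self.dropout = nn.Dropout(p_drop)
        self.act_head = nn.Linear(2 * hidden, n_user_acts)

    def forward(self, system_act_ids):
        # system_act_ids: (batch, turns) integer ids of system dialogue acts
        x = self.embed(system_act_ids)
        _, h = self.encoder(x)                  # h: (2, batch, hidden), one per direction
        h = torch.cat([h[0], h[1]], dim=-1)     # concatenate forward and backward states
        return self.act_head(self.dropout(h))   # logits over user dialogue acts

# Usage: predict user dialogue-act probabilities after a two-turn system history.
model = NeuralUserModel()
logits = model(torch.tensor([[3, 7]]))
probs = torch.sigmoid(logits)                   # multi-label probabilities
```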

    CLEF 2017 dynamic search evaluation lab overview

    In this paper we provide an overview of the first edition of the CLEF Dynamic Search Lab. The CLEF Dynamic Search Lab ran in the form of a workshop with the goal of approaching one key question: how can we evaluate dynamic search algorithms? Unlike static search algorithms, which essentially consider user requests independently and do not adapt the ranking with respect to the user's sequence of interactions, dynamic search algorithms try to infer the user's intentions from their interactions and then adapt the ranking accordingly. Personalized session search, contextual search, and dialog systems often adopt such algorithms. This lab provides an opportunity for researchers to discuss the challenges faced when trying to measure and evaluate the performance of dynamic search algorithms, given the context of available corpora, simulation methods, and current evaluation metrics. To seed the discussion, a pilot task was run with the goal of producing search agents that could simulate the process of a user interacting with a search system over the course of a search session. Herein, we describe the overall objectives of the CLEF 2017 Dynamic Search Lab, the resources created for the pilot task, and the evaluation methodology adopted.
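    To make the static/dynamic distinction and the pilot task's simulated-user setup concrete, here is a small sketch of a session loop in Python. The scoring functions, document fields, and click model are toy assumptions, not the lab's actual resources or metrics.

```python
# Toy sketch: a static ranker scores each request in isolation, while a dynamic
# ranker also exploits the simulated user's earlier interactions in the session.
import random

def static_rank(docs, query):
    # Static: ignores the session, scores documents against the request only.
    return sorted(docs, key=lambda d: -len(set(query) & set(d["terms"])))

def dynamic_rank(docs, query, clicked_terms):
    # Dynamic: additionally rewards terms the user engaged with earlier.
    def score(d):
        return len(set(query) & set(d["terms"])) + 2 * len(clicked_terms & set(d["terms"]))
    return sorted(docs, key=lambda d: -score(d))

def simulated_user(ranking, intent_terms):
    # Clicks the first result that overlaps with the (hidden) search intent.
    for doc in ranking:
        if intent_terms & set(doc["terms"]):
            return doc
    return None

docs = [{"id": i, "terms": random.sample(["travel", "hotel", "paris", "flight", "museum"], 3)}
        for i in range(10)]
clicked, intent = set(), {"museum", "paris"}
for turn in range(3):                        # a three-step search session
    ranking = dynamic_rank(docs, ["paris"], clicked)
    click = simulated_user(ranking, intent)
    if click:
        clicked |= set(click["terms"])       # feedback the dynamic ranker adapts to
```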

    Nonstrict hierarchical reinforcement learning for interactive systems and robots

    Conversational systems and robots that use reinforcement learning for policy optimization in large domains often face the problem of limited scalability. This problem has been addressed either by using function approximation techniques that estimate the true value function of a policy approximately, or by using a hierarchical decomposition of a learning task into subtasks. We present a novel approach for dialogue policy optimization that combines the benefits of both hierarchical control and function approximation and that allows flexible transitions between dialogue subtasks in order to give human users more control over the dialogue. To this end, each reinforcement learning agent in the hierarchy is extended with a subtask transition function and a dynamic state space to allow flexible switching between subdialogues. In addition, the subtask policies are represented with linear function approximation in order to generalize decision making to situations unseen in training. Our proposed approach is evaluated in an interactive conversational robot that learns to play quiz games. Experimental results, using simulation and real users, provide evidence that our proposed approach can lead to more flexible (natural) interactions than strict hierarchical control and that it is preferred by human users.
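    The two ingredients named here, linear function approximation for subtask policies and a subtask transition function for non-strict switching, can be illustrated with a short sketch. The feature vector, subtask names, and switching rule below are assumptions made for the example, not the system described in the paper.

```python
# Sketch of (1) a per-subtask agent with a linear Q-function and (2) a subtask
# transition function that lets the user pull the dialogue into another subtask.
import numpy as np

class SubtaskAgent:
    def __init__(self, n_features, actions, alpha=0.1):
        self.w = {a: np.zeros(n_features) for a in actions}   # linear Q weights per action
        self.alpha = alpha

    def q(self, features, action):
        return float(self.w[action] @ features)

    def act(self, features, epsilon=0.1):
        if np.random.rand() < epsilon:
            return np.random.choice(list(self.w))
        return max(self.w, key=lambda a: self.q(features, a))

    def update(self, features, action, target):
        # One gradient step toward the bootstrapped TD target.
        self.w[action] += self.alpha * (target - self.q(features, action)) * features

def subtask_transition(state, current_subtask):
    # Non-strict hierarchy: switch subdialogues whenever the user requests it,
    # instead of forcing the current subtask to terminate first.
    if state.get("user_requested") and state["user_requested"] != current_subtask:
        return state["user_requested"]
    return current_subtask

agents = {"greeting": SubtaskAgent(4, ["greet", "ask_name"]),
          "quiz": SubtaskAgent(4, ["ask_question", "give_feedback"])}
subtask = "greeting"
state = {"user_requested": "quiz"}                 # user jumps straight to the quiz
features = np.array([1.0, 0.0, 1.0, 0.0])
subtask = subtask_transition(state, subtask)       # flexible switch to "quiz"
action = agents[subtask].act(features)
```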

    The MATCH Corpus: A Corpus of Older and Younger Users' Interactions With Spoken Dialogue Systems.

    We present the MATCH corpus, a unique data set of 447 dialogues in which 26 older and 24 younger adults interact with nine different spoken dialogue systems. The systems varied in the number of options presented and the confirmation strategy used. The corpus also contains information about the users' cognitive abilities and detailed usability assessments of each dialogue system. The corpus, which was collected using a Wizard-of-Oz methodology, has been fully transcribed and annotated with dialogue acts and "Information State Update" (ISU) representations of dialogue context. Dialogue act and ISU annotations were performed semi-automatically. In addition to describing the corpus collection and annotation, we present a quantitative analysis of the interaction behaviour of older and younger users and discuss further applications of the corpus. We expect that the corpus will provide a key resource for modelling older people's interaction with spoken dialogue systems.
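    As a rough illustration of the kind of record such a corpus contains (turn transcripts, dialogue acts, ISU context, and per-dialogue metadata), here is a hedged data-structure sketch. The field names and values are assumptions for illustration, not the MATCH release format.

```python
# Hypothetical in-memory representation of one annotated dialogue.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Turn:
    speaker: str                      # "user" or "system"
    transcript: str
    dialogue_act: str                 # e.g. "ask", "confirm", "provide_info"
    isu_state: Dict[str, str] = field(default_factory=dict)  # Information State Update context

@dataclass
class Dialogue:
    dialogue_id: str
    user_age_group: str               # "older" or "younger"
    system_id: int                    # one of the nine systems
    usability_score: float            # per-dialogue usability assessment
    turns: List[Turn] = field(default_factory=list)

example = Dialogue("d001", "older", 3, 4.2,
                   [Turn("system", "Which appointment would you like?", "ask",
                         {"task": "appointment"})])
```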

    A probabilistic framework for dialog simulation and optimal strategy learning


    User and Noise Adaptive Dialogue Management Using Hybrid System Actions

    In recent years, reinforcement-learning-based approaches have been widely used for management policy optimization in spoken dialogue systems (SDS). A dialogue management policy is a mapping from dialogue states to system actions, i.e. given the state of the dialogue, the dialogue policy determines the next action to be performed by the dialogue manager. So far, policy optimization has primarily focused on mapping the dialogue state to simple system actions (such as confirming or asking for one piece of information), and the possibility of using complex system actions (such as confirming or asking for several slots at the same time) has not been well investigated. In this paper we explore the possibilities of using complex (or hybrid) system actions for dialogue management and then discuss the impact of user experience and channel noise on complex action selection. Our experimental results, obtained using simulated users, reveal that user- and noise-adaptive hybrid action selection can perform better than dialogue policies which can only perform simple actions.
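    The core idea, letting a learned policy choose between simple and multi-slot (hybrid) actions while conditioning on channel noise and user experience, can be sketched compactly. The action names, state abstraction, and thresholds below are illustrative assumptions, not the paper's configuration.

```python
# Sketch of noise- and user-adaptive selection between simple and hybrid actions,
# using a tabular epsilon-greedy policy over an abstracted dialogue state.
import random

ACTIONS = ["ask_slot", "confirm_slot",                 # simple actions
           "ask_two_slots", "confirm_all_slots"]       # hybrid (complex) actions

def discretize(noise_level, user_experienced, slots_filled):
    # State abstraction: high channel noise makes multi-slot actions riskier to
    # recognize; experienced users tolerate them better.
    return ("noisy" if noise_level > 0.3 else "clean",
            "expert" if user_experienced else "novice",
            slots_filled)

def policy(state, q_table, epsilon=0.1):
    # Epsilon-greedy over a Q-table keyed by (state, action).
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

q_table = {}                                            # would be filled by RL training
state = discretize(noise_level=0.1, user_experienced=True, slots_filled=0)
action = policy(state, q_table)   # with a trained table, experts in clean conditions
                                  # would tend to receive hybrid actions
```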

    Machine Learning for Spoken Dialogue Management: An Experiment with Speech-Based Database Querying

    Although speech and language processing techniques have achieved a relative maturity during the last decade, designing a spoken dialogue system is still a tailoring task because of the great variability of factors to take into account. Rapid design and reuse of previous work across tasks is therefore very difficult. For these reasons, machine learning methods applied to dialogue strategy optimization have become a leading subject of research since the mid-1990s. In this paper, we describe an experiment in reinforcement learning applied to the optimization of speech-based database querying. We especially emphasize the sensitivity of the method to the dialogue modeling parameters in the framework of Markov decision processes, namely the state space and the reinforcement signal. The evolution of the design is presented, as well as results obtained on a simple real application.
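    Since the abstract highlights the two modeling parameters of the MDP framing, the state space and the reinforcement signal, here is a minimal sketch of that framing for database querying. The state abstraction, reward values, and toy environment are assumptions for illustration, not the paper's system.

```python
# Toy Q-learning over an MDP whose state summarizes the database query
# (known constraints, remaining matches) and whose reward trades task success
# against dialogue length.
import random
from collections import defaultdict

ACTIONS = ["ask_constraint", "confirm", "present_results"]

def step(state, action):
    constraints, matches = state
    if action == "ask_constraint":
        return (constraints + 1, max(1, matches // 3)), False   # narrow the query
    if action == "present_results":
        return state, True                                      # dialogue ends
    return state, False                                         # "confirm" keeps the state

def reward(state, action):
    if action == "present_results" and state[1] <= 3:           # few enough matches to read out
        return 20.0                                              # task success
    return -1.0                                                  # per-turn cost

q = defaultdict(float)
alpha, gamma, epsilon = 0.2, 0.95, 0.1
for episode in range(2000):
    state = (0, 100)                                             # no constraints, 100 DB matches
    for turn in range(20):                                       # cap dialogue length
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda x: q[(state, x)])
        nxt, done = step(state, a)
        target = reward(state, a) + (0.0 if done else gamma * max(q[(nxt, x)] for x in ACTIONS))
        q[(state, a)] += alpha * (target - q[(state, a)])
        state = nxt
        if done:
            break
```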

    Co-adaptation in Spoken Dialogue Systems

    Spoken dialogue systems are man-machine interfaces which use speech as the medium of interaction. In recent years, dialogue optimization using reinforcement learning has evolved into a state-of-the-art technique. The primary focus of research in the dialogue domain is to learn some optimal policy with regard to the task description (reward function) and the user simulation being employed. However, in the case of human-human interaction, the parties involved in the conversation mutually evolve over the period of interaction. This ability of humans to co-adapt contributes largely to the naturalness of the dialogue. This paper outlines a novel framework for co-adaptation in spoken dialogue systems, where the dialogue manager and the user simulation evolve over a period of time; they incrementally and mutually optimize their respective behaviors.
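    The co-adaptation loop, where the dialogue manager and the user simulation take turns improving against each other rather than one staying fixed, can be shown with a toy alternating-optimization sketch. Both learners and the payoff function below are placeholders assumed for the example, not the proposed framework itself.

```python
# Toy alternating optimization: the manager and the user simulation are each
# reinforced against the other's current behaviour, so both evolve over time.
import random

class Learner:
    def __init__(self, actions):
        self.prefs = {a: 0.0 for a in actions}   # running value estimate per action

    def act(self, epsilon=0.2):
        if random.random() < epsilon:
            return random.choice(list(self.prefs))
        return max(self.prefs, key=self.prefs.get)

    def reinforce(self, action, reward, lr=0.1):
        self.prefs[action] += lr * (reward - self.prefs[action])

def episode_reward(system_action, user_action):
    # Toy payoff: the dialogue succeeds when the two behaviours are compatible.
    return 1.0 if (system_action, user_action) in {("ask_slot", "answer"),
                                                   ("confirm", "affirm")} else 0.0

manager = Learner(["ask_slot", "confirm"])
user_sim = Learner(["answer", "affirm", "ignore"])
for i in range(5000):
    s, u = manager.act(), user_sim.act()
    r = episode_reward(s, u)
    if i % 2 == 0:
        manager.reinforce(s, r)     # manager adapts to the current user model
    else:
        user_sim.reinforce(u, r)    # user model adapts to the current manager
```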

    Proceedings of The 4th Workshop on Machine Learning for Interactive Systems

    The goal of this workshop is to bring together researchers from multiple disciplines who are in one way or another affected by the gap between perception, action and communication that typically exists for data-driven interactive systems or robots. We aim to provide a forum for interdisciplinary discussion that allows researchers to look at their work from new perspectives that go beyond their core community, and potentially to develop new interdisciplinary collaborations driven by machine learning. A multidisciplinary viewpoint is important for developing agents with a holistic perspective of the world. It is also vital for the design of agents that solve large-scale and complex real-world problems in a principled way. Machine learning will stand at the core of the workshop as a common interest across researchers.